Self-Supervised Learning for Domain Adaptation on Point-Clouds
Self-supervised learning (SSL) makes it possible to learn useful representations
from unlabeled data and has been applied effectively for domain adaptation (DA)
on images. However, it remains unknown whether and how SSL can be leveraged for
domain adaptation in 3D perception. Here we present the first study of SSL for
DA on point clouds. We introduce a new family of pretext tasks, \textit{Deformation
Reconstruction}, motivated by the deformations encountered in sim-to-real
transformations. The key idea is to deform regions of the input shape and use a
neural network to reconstruct them. We design three types of shape deformation
methods: (1) \textit{Volume-based:} shape deformation based on proximity in the
input space; (2) \textit{Feature-based:} deforming regions in the shape that
are semantically similar; and (3) \textit{Sampling-based:} shape deformation
based on three simple sampling schemes. As a separate contribution, we also
develop a new method based on the Mixup training procedure for point clouds.
Evaluations on six domain adaptation setups across synthetic and real furniture
data demonstrate large improvements over previous work.
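To make the pretext task concrete, here is a minimal sketch of the volume-based variant: select a random anchor point, collapse all points within a radius of it onto the anchor, and let a reconstruction network restore the region. The function name, the collapse-to-anchor deformation, and the radius parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def volume_based_deform(points, radius=0.2, seed=None):
    """Illustrative volume-based deformation (an assumption, not the
    paper's exact method): pick a random anchor and collapse all points
    within `radius` of it onto the anchor. A reconstruction network
    would then be trained to restore the deformed region.

    points: (N, 3) array of xyz coordinates.
    Returns (deformed, mask), where mask marks the deformed points.
    """
    rng = np.random.default_rng(seed)
    anchor = points[rng.integers(len(points))]
    # Proximity in the input (xyz) space defines the deformed region.
    dists = np.linalg.norm(points - anchor, axis=1)
    mask = dists < radius
    deformed = points.copy()
    deformed[mask] = anchor  # collapse the region; the network must undo this
    return deformed, mask
```

The feature-based and sampling-based variants would differ only in how the deformed region is selected (semantic similarity in feature space, or a sampling scheme, respectively) rather than in the collapse-and-reconstruct loop itself.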